
    Constraint-based reachability

Iterative imperative programs can be considered as infinite-state systems computing over possibly unbounded domains. Studying reachability in these systems is challenging, as it requires dealing with an infinite number of states under standard backward or forward exploration strategies. We propose an approach, called constraint-based reachability, that addresses reachability problems by exploring program states using a constraint model of the whole program. The key point of the approach is to interpret imperative constructions such as conditionals, loops, and array and memory manipulations through the fundamental notion of a constraint over a computational domain. By combining constraint filtering and abstraction techniques, constraint-based reachability can solve reachability problems that are usually outside the scope of backward or forward exploration strategies. This paper proposes an interpretation of the classical filtering consistencies used in Constraint Programming as abstract domain computations, and shows how this approach can be used to produce a constraint solver that efficiently generates solutions for reachability problems that are unsolvable by other approaches.
Comment: In Proceedings Infinity 2012, arXiv:1302.310
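To make the core idea concrete, here is a minimal sketch, assuming a toy interval domain, of how an assignment can be read as a relational constraint and filtered in both directions; the names (`Interval`, `filter_add_const`) are illustrative and are not the paper's solver:

```python
from dataclasses import dataclass

@dataclass
class Interval:
    lo: float
    hi: float
    def meet(self, other):
        # Intersection of two intervals: the filtering step.
        return Interval(max(self.lo, other.lo), min(self.hi, other.hi))
    def is_empty(self):
        return self.lo > self.hi

def filter_add_const(x, c, y):
    """Read the assignment y = x + c as the relation y - x = c and
    filter both domains: forward (x -> y) and backward (y -> x)."""
    y2 = y.meet(Interval(x.lo + c, x.hi + c))
    x2 = x.meet(Interval(y.lo - c, y.hi - c))
    return x2, y2

# Reachability query: with x in [0, 10], can y = x + 5 land in [100, 200]?
x, y = Interval(0, 10), Interval(100, 200)
x, y = filter_add_const(x, 5, y)
print("unreachable" if x.is_empty() or y.is_empty() else "reachable")
# Prints "unreachable": filtering refutes the query without
# enumerating a single concrete state.
```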

    Goal-oriented test data generation for pointer programs

Keywords: goal-oriented test data generation; Constraint Logic Programming; Static Single Assignment form
Automatic test data generation aims to identify input values on which a selected path or a selected branch is executed within a program (path-oriented vs. goal-oriented methods). In both cases, several approaches based on constraint solving exist, but in the presence of pointer variables only path-oriented methods have been proposed. Pointers give rise to conditional aliasing problems that usually cause the goal-oriented test data generation process to fail. In this paper, we propose an overall constraint-based method that exploits the results of an intraprocedural points-to analysis and provides two specific constraint combinators for automatically generating goal-oriented test data. This approach correctly handles multi-level stack-directed pointers, which are common in C programs. The method has been fully implemented in the test data generation tool INKA, and initial experience in applying it to a variety of existing programs is presented.
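As an illustration of the conditional aliasing problem the abstract describes, the sketch below (hypothetical names; not INKA's combinators) enumerates the cases induced by a points-to set for the statement `*p = v` and searches each case for input values reaching a goal branch; a real implementation would delegate each case to a constraint solver rather than brute-forcing `v`:

```python
from itertools import product

points_to = ["a", "b"]   # p may point to a or b, per a points-to analysis

def reaches_goal(target, v):
    """Concrete effect of '*p = v' in one aliasing case, followed by
    the goal branch condition 'a == 5'."""
    a, b = 0, 0                       # assumed initial store
    if target == "a":
        a = v
    else:
        b = v
    return a == 5

# Case-split on aliasing instead of enumerating whole program paths.
for target, v in product(points_to, range(10)):
    if reaches_goal(target, v):
        print(f"test data: p = &{target}, v = {v}")   # p = &a, v = 5
        break
```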

    An abstract interpretation based combinator for modeling while loops in constraint programming

We present the w constraint combinator, which models while loops in Constraint Programming. Embedded in a finite-domain constraint solver, it allows programmers to develop non-trivial arithmetical relations using loops, exactly as in an imperative language style. The deduction capabilities of this combinator come from abstract interpretation over the polyhedra abstract domain. This combinator has already demonstrated its utility in constraint-based verification, and we argue that it also facilitates the rapid prototyping of arithmetic constraints (e.g., power, gcd, or sum).
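A minimal sketch of such a loop combinator follows, using the interval domain with widening and one narrowing step rather than the polyhedra domain of the paper; the `w` signature below is an assumption made for illustration:

```python
INF = float("inf")

def meet(a, b):   # interval intersection
    return (max(a[0], b[0]), min(a[1], b[1]))

def join(a, b):   # interval union (convex hull)
    return (min(a[0], b[0]), max(a[1], b[1]))

def widen(a, b):  # jump unstable bounds to infinity to force termination
    return (a[0] if a[0] <= b[0] else -INF,
            a[1] if a[1] >= b[1] else INF)

def w(entry, guard, neg_guard, body):
    """Deduce the interval holding at loop exit: iterate the abstract
    body with widening to a post-fixpoint, apply one narrowing step,
    then filter with the negated guard."""
    inv = entry
    while True:
        nxt = widen(inv, join(entry, body(meet(inv, guard))))
        if nxt == inv:
            break
        inv = nxt
    inv = join(entry, body(meet(inv, guard)))   # narrowing step
    return meet(inv, neg_guard)

# while (x < 100) x = x + 1;  starting from x in [0, 0]
print(w(entry=(0.0, 0.0),
        guard=(-INF, 99.0),                  # x < 100 over integers
        neg_guard=(100.0, INF),              # x >= 100
        body=lambda d: (d[0] + 1, d[1] + 1)))
# -> (100.0, 100.0): the combinator deduces x == 100 at loop exit.
```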

    Goal-oriented test data generation for programs with pointer variables

Automatic test data generation leads to the identification of input values on which a selected path or a selected branch is executed within a program (path-oriented vs. goal-oriented methods). In both cases, several approaches based on constraint solving exist, but in the presence of pointer variables only path-oriented methods have been proposed. This paper proposes to extend an existing goal-oriented test data generation technique to deal with multi-level pointer variables. The approach exploits the results of an intraprocedural flow-sensitive points-to analysis to automatically generate goal-oriented test data at the unit testing level. Implementation is in progress, and a few examples are presented.
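For a feel of what a flow-sensitive points-to analysis provides to the generator, here is a toy intraprocedural version over straight-line pointer statements; the statement encoding is an assumption made for this sketch:

```python
def transfer(pts, stmt):
    """pts maps each pointer variable to the set of variables it may
    point to; one statement produces the updated map."""
    kind, lhs, rhs = stmt
    out = {v: set(s) for v, s in pts.items()}
    if kind == "addr":        # lhs = &rhs
        out[lhs] = {rhs}
    elif kind == "copy":      # lhs = rhs
        out[lhs] = set(pts.get(rhs, set()))
    elif kind == "deref":     # lhs = *rhs (one more indirection level)
        out[lhs] = set().union(*(pts.get(t, set())
                                 for t in pts.get(rhs, set())))
    return out

pts = {}
for stmt in [("addr", "p", "a"), ("addr", "q", "p"), ("deref", "r", "q")]:
    pts = transfer(pts, stmt)
    print(stmt, "->", pts)
# Final map: r may point only to a; such singleton points-to sets tell
# the goal-oriented generator that no aliasing case split is needed.
```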

    Improving Constraint-Based Testing with Dynamic Linear Relaxations

Constraint-Based Testing (CBT) is the process of generating test cases against a testing objective by using constraint solving techniques. In CBT, testing objectives are given in the form of properties to be satisfied by a program's inputs and outputs. Whenever the program or the properties contain disjunctions or multiplications between variables, CBT faces the problem of solving non-linear constraint systems. Existing CBT tools currently tackle this problem by exploiting a finite-domain constraint solver. However, solving a non-linear constraint system over finite domains is NP-hard, and CBT tools fail to handle properly most properties to be tested. In this paper, we present a CBT approach in which a finite-domain constraint solver is enhanced by Dynamic Linear Relaxations (DLRs). DLRs are based on linear abstractions derived during the constraint solving process. They dramatically increase the solving capabilities of the solver in the presence of non-linear constraints without compromising the completeness or soundness of the overall CBT process. We implemented DLRs within the CBT tool TAUPO, which generates test data for programs written in C. The approach has been validated on difficult non-linear properties over a few (academic) C programs.
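The classic McCormick envelope for a multiplication constraint gives a concrete example of deriving linear relaxations from current domains; take this as an illustration of the general idea behind such relaxations, not as TAUPO's actual DLRs:

```python
def mccormick_cuts(xl, xu, yl, yu):
    """Linear inequalities a*x + b*y + c*z <= d satisfied by every point
    of z = x * y with x in [xl, xu] and y in [yl, yu]."""
    return [
        ( yl,  xl, -1.0,  xl * yl),   # z >= yl*x + xl*y - xl*yl
        ( yu,  xu, -1.0,  xu * yu),   # z >= yu*x + xu*y - xu*yu
        (-yl, -xu,  1.0, -xu * yl),   # z <= yl*x + xu*y - xu*yl
        (-yu, -xl,  1.0, -xl * yu),   # z <= yu*x + xl*y - xl*yu
    ]

def satisfies(cuts, x, y, z, eps=1e-9):
    return all(a * x + b * y + c * z <= d + eps for (a, b, c, d) in cuts)

# Relax z = x*y over the current finite domains x, y in [0, 4].
cuts = mccormick_cuts(0.0, 4.0, 0.0, 4.0)
print(satisfies(cuts, 2, 3, 6))    # True:  consistent with z = x*y
print(satisfies(cuts, 2, 3, 10))   # False: the relaxation prunes (2, 3, 10)
# As the solver shrinks the domains, the cuts are re-derived from the new
# bounds, which is what makes such relaxations "dynamic".
```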

Data Mining and Cross-checking of Execution Traces: A re-interpretation of Jones, Harrold and Stasko's test information visualization (Long version)

The current trend in debugging and testing is to cross-check information collected during several executions. Jones et al., for example, propose to use the instruction coverage of passing and failing runs in order to visualize suspicious statements. This seems promising but lacks a formal justification. In this paper, we show that the method of Jones et al. can be re-interpreted as a data mining procedure. More precisely, the suspicion indicator they define can be rephrased in terms of well-known metrics of the data mining domain. These metrics characterize association rules between data. Within this formal framework, we are able to explain limitations of the above indicator. Three significant hypotheses were implicit in the original work: 1) there exists at least one statement that can be considered faulty; 2) the values of the suspicion indicator for different statements should be independent of each other; 3) executing a faulty statement leads most of the time to a failure. We show that these hypotheses are hard to fulfill and that the link between the indicator and the correctness of a statement is not straightforward. The underlying idea of association rules is nevertheless still promising, and our conclusion highlights possible directions for improvement.
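The indicator in question is the Tarantula suspiciousness score of Jones et al.; the small worked example below (coverage counts invented for illustration) computes it alongside the confidence of the association rule "executing s implies failure", the kind of data mining metric the paper relates it to:

```python
def suspiciousness(fail_s, pass_s, total_fail, total_pass):
    """Tarantula score in [0, 1]: 1 means executed only by failing runs."""
    f = fail_s / total_fail
    p = pass_s / total_pass
    return f / (f + p) if f + p > 0 else 0.0

def rule_confidence(fail_s, pass_s):
    """Confidence of the association rule exec(s) -> failure."""
    execs = fail_s + pass_s
    return fail_s / execs if execs else 0.0

# statement -> (failing runs covering it, passing runs covering it)
coverage = {"s1": (3, 7), "s2": (3, 1), "s3": (0, 9)}
F, P = 3, 10                               # 3 failing runs, 10 passing runs
for s, (f, p) in coverage.items():
    print(s, round(suspiciousness(f, p, F, P), 2),
          round(rule_confidence(f, p), 2))
# s2 scores highest on both metrics, but as the paper argues, a high
# score does not by itself establish that s2 is faulty.
```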